
    A decision model for the efficient management of a conservation fund over time

    An important task of conservation biology is to assist policy makers in the design of ecologically effective conservation strategies and instruments. Various decision rules and guidelines originate, e.g., from the theory of island biogeography (MacArthur & Wilson, 1967) and metapopulation theory (Hanski, 1999). Designing effective strategies and instruments, however, is only part of the solution to problems of biodiversity conservation. In the real world, financial resources are scarce, and it is not only important that policies are ecologically effective but also that they are economically efficient, i.e. lead to maximum ecological benefit for a given resource input. Efficiency has been analysed, e.g., in the context of the spatial allocation of conservation funds (Wu & Boggess, 1999) and of the spatial design of compensation payments for biodiversity-enhancing land-use measures (Wätzold & Drechsler, 2002). Decision analysis is a helpful tool for integrating knowledge from different disciplines and identifying optimal strategies and policies (e.g., Drechsler & Burgman, 2003). Methods of decision analysis, such as optimisation procedures, are often a core component of ecological-economic models that bring together ecological and economic knowledge via formal models (e.g., Ando et al., 1998, Drechsler & Wätzold, 2001, Johst et al., 2002). Such models not only allow a static integration of economic and ecological aspects but also make it possible to describe the dynamics of ecological and economic systems in an integrated manner (Perrings, 2002). Examples of such dynamic modelling approaches are Richards et al. (1999), Costello & Polasky (2003) and Shogren et al. (2003). In the present paper we investigate a dynamic conservation management problem different from those of the above-mentioned authors and tackle the problem of long-term conservation when future financial budgets are uncertain.
The background to this problem is that many species can only survive if certain types of biodiversity-enhancing land-use measures are carried out on a regular basis, such as regularly mowing meadows to create habitat for butterflies (Settele & Henle, 2002). This means that funds have to be available regularly over time, because a temporal gap in the availability of funds may irrevocably drive a species to extinction. Over the last two decades or so, a growing commitment of society and governments to conserving biodiversity could be observed, which in many cases included an increasing provision of funds for this purpose; there are signs, however, that this commitment is currently weakening. Examples of such signs are opinion polls in some countries (e.g. Germany) showing that environmental and resource protection issues are given a lower priority by the general public than ten years ago. This implies an increasing risk that conservation funds will be lower in the future than today, either through a decrease in political support for such funds or through a decline in donations to private organisations that finance conservation funds. This risk forces governments and conservation organisations concerned with the long-term prevention of species loss to explore options which ensure that their policy aims will be achieved even if future funds are lower than today's. Obviously, one important option is to save part of the current financial resources to counterbalance possible future budget cuts. In this context, the problem arises of which proportion of the available budget should be spent now and which proportion later. In summary, there is the problem of efficiently allocating a conservation budget over time to maximise the survival probability of an endangered species, where the current budget is reasonably high and future conservation budgets are expected to decline in the medium term (although the size of these budgets is not known with any certainty).
The aim of this paper is to address this problem on a conceptual level. To have a mechanism that is able to transfer current money to the future regardless of subsequent governments' preferences and policies, we assume that a conservation fund is established that is independent of any future government's decisions and administered by an independent agency with the time-consistent objective of allocating financial resources over time such that the survival probability of an endangered species is maximised. The probability P of a population surviving T+1 periods, each of length τ, can be written as the product of the probabilities of surviving each individual period (the complete description of the model, including a more in-depth discussion of the results than presented here, can be found in Drechsler & Wätzold, 2003): P = Π_{t=0..T} exp[-aτ/(K(0)+κ_t)^z] (1), where a is some species-specific parameter and K(0) is the habitat capacity when no conservation measures are carried out (Lande, 1993; Grimm & Wissel, 2004). Conservation measures increase K(0) by κ_t, which costs an amount of money p_t = bκ_t with b constant. The parameter z depends on the species and is inversely proportional to the coefficient of variation of the population growth rate (Lande, 1993; Grimm & Wissel, 2004). Each year an amount of money g_t = h_t + ε_t is granted to the conservation manager, where h_t is the deterministic component and ε_t ∈ [-δ, +δ] is random and uniformly distributed to describe uncertainty in the future budgets. Money that is not spent can be moved into a fund F_t from which money can be drawn in later periods. The fund thus develops like F_{t+1} = F_t + g_t - p_t (2). Borrowing is excluded, such that in each period only up to an amount F_t + g_t can be spent: p_t ≤ F_t + g_t, t = 0,...,T (3). In each period the conservation manager has to decide how much money p_t to spend on conservation in the present period and how much to allocate to the fund F and save for future periods.
This inter-temporal optimisation problem is solved via stochastic dynamic programming (e.g., Clark, 1990). Due to constraint (3), the solution is not straightforward. In each period, two possible solutions may formally occur: a corner solution where all available money is spent (p_t = F_t + g_t) and an interior solution where less than that is spent and some money is transferred to the next period. It turns out that the optimal payment in a certain period t depends on the number l of consecutive periods following the present period that have an interior solution (eq. (4)). One can see that the optimal payment increases with increasing fund F_t but decreases with increasing uncertainty in the grants. The latter has been shown by Leland (1968) in a 2-period model without constraint (3); it is denoted as "precautionary" saving and explained by the particular shape of the objective function. From eq. (4) one can also see that more money is saved when z is large, i.e., when the aim is to conserve species with weakly fluctuating population growth. One can further show that it is optimal to allocate the payments as evenly over time as constraint (3) allows. If, e.g., we have constantly decreasing grants, it is optimal to save in the beginning and spend the saved money in the final periods. The problem now is that the number l depends on the future grants; if these are not known, l is not known and can only be approximated by a probability distribution P(l). For the case where a negative trend is expected in the grants, such that g_t = h_0 - γt + ε_t, we have determined P(l) and the expected optimal payment. It turned out that if the uncertainty in the grants is large or small compared to their deterministic trend, one obtains a solution that is structurally similar to eq. (4), i.e. we have a situation of precautionary saving. In contrast, if the uncertainty is of about the same order of magnitude as the trend, we found cases where uncertainty increased the optimal payments.
The reason is that the uncertainty has two contrary effects. One is the standard "precautionary saving" effect caused by the shape of the benefit function. The other, opposing effect is that uncertainty may reduce the (expected) number l and thus increase the optimal payment. Sometimes the latter effect is stronger. However, we found strong evidence that the magnitude of such "precautionary spending" is negligibly small, and for practical purposes we conclude that uncertainty generally reduces the optimal payment and more money should be saved.
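The backward-induction scheme described above can be sketched as follows; the survival function, grant parameters and grids below are hypothetical stand-ins, not the model of Drechsler & Wätzold (2003):

```python
import numpy as np

# Sketch of stochastic dynamic programming for allocating a conservation
# budget over time. State: fund level F_t. Each period the manager observes
# the grant g_t = h0 - trend*t + eps_t (eps_t uniform on [-delta, +delta])
# and chooses a payment p_t <= F_t + g_t (no borrowing); the rest is saved.
# All functional forms and parameter values are hypothetical placeholders.

T = 10                                      # planning horizon (periods)
h0, trend, delta = 1.0, 0.05, 0.2           # hypothetical grant parameters
F_grid = np.linspace(0.0, 5.0, 101)         # discretised fund levels
eps_draws = np.linspace(-delta, delta, 11)  # quadrature over uniform noise

def period_survival(p):
    # Stand-in concave benefit: probability of surviving one period,
    # increasing and saturating in the payment p.
    return 1.0 - np.exp(-2.0 * (1.0 + p))

# Backward induction: V[i] is the maximal product of per-period survival
# probabilities from period t onward, starting with fund F_grid[i].
V = np.ones_like(F_grid)                    # value after the final period
for t in reversed(range(T)):
    V_new = np.empty_like(V)
    for i, F in enumerate(F_grid):
        ev = 0.0
        for eps in eps_draws:               # expectation over grant noise
            budget = F + max(h0 - trend * t + eps, 0.0)
            p = np.linspace(0.0, budget, 41)            # candidate payments
            F_next = np.minimum(budget - p, F_grid[-1])  # money carried over
            ev += np.max(period_survival(p) * np.interp(F_next, F_grid, V))
        V_new[i] = ev / len(eps_draws)
    V = V_new

print(f"survival probability starting with an empty fund: {V[0]:.3f}")
```

Re-running the recursion with delta = 0 gives a quick way to see how grant uncertainty shifts the optimal payments relative to the deterministic case.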

    An agglomeration payment for cost-effective biodiversity conservation in spatially structured landscapes

    Compensation schemes in which landowners receive payments for voluntarily managing their land in a biodiversity-enhancing manner have become one of the most important instruments for biodiversity conservation worldwide. One key challenge when designing such schemes is to account for the spatial arrangement of habitats, bearing in mind that for a given total habitat area connected habitats are ecologically more valuable than isolated habitats. To integrate the spatial dimension in compensation schemes, and based on the idea of an agglomeration bonus, we consider a scheme in which landowners only receive payments if managed patches are arranged in a specific spatial configuration. We compare the cost-effectiveness of agglomeration payments with spatially homogeneous payments on a conceptual level and for a real-world case and find that the efficiency gains of agglomeration payments are positive or zero but never negative. In the real-world case, agglomeration payments lead to cost savings of up to 70% compared to spatially homogeneous payments. Keywords: agglomeration bonus, biodiversity conservation, cost-effectiveness, ecological-economic modelling, metapopulation, spatial heterogeneity
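A toy calculation illustrates why conditioning the payment on spatial configuration can save costs; the landscape, costs and payment rules below are hypothetical and far simpler than the paper's ecological-economic model:

```python
import numpy as np

# Toy illustration (hypothetical numbers, not the paper's model): suppose
# the conservation goal is k ADJACENT managed parcels in a row of parcels
# with heterogeneous opportunity costs.

rng = np.random.default_rng(0)
costs = rng.uniform(1.0, 10.0, size=20)  # opportunity cost of each parcel
k = 4                                    # required number of adjacent parcels

# Homogeneous payment: the smallest uniform per-parcel payment that makes
# some window of k adjacent parcels profitable to enrol...
p_hom = min(costs[i:i + k].max() for i in range(len(costs) - k + 1))
# ...but every parcel anywhere with cost below the payment enrols and is paid.
budget_hom = p_hom * np.sum(costs <= p_hom)

# Agglomeration payment: only the targeted k-parcel configuration is
# eligible, so the same per-parcel payment is spent on exactly k parcels.
budget_agg = p_hom * k

print(f"homogeneous budget {budget_hom:.1f} vs agglomeration budget {budget_agg:.1f}")
```

Because the best k-parcel window already contains k parcels cheaper than the homogeneous payment, the agglomeration budget can never exceed the homogeneous one in this toy setting, mirroring the "positive or zero but never negative" finding.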

    New insight into the physics of iron pnictides from optical and penetration depth data

    We report theoretical values for the unscreened plasma frequencies Omega_p of several Fe pnictides obtained from DFT-based calculations within the LDA and compare them with experimental plasma frequencies obtained from reflectivity data. The sizable renormalization observed for all considered compounds points to the presence of many-body effects beyond the LDA. From the large empirical background dielectric constant of about 12-15, we estimate a large arsenic polarizability of about 9.5 +- 1.2 Angstroem^3, where the details depend on the polarizabilities of the remaining ions taken from the literature. This large polarizability can significantly reduce the value of the Coulomb repulsion U_d of about 4 eV on iron, known from iron oxides, to a level of 2 eV or below. In general, this result points to rather strong polaronic effects as suggested by G.A. Sawatzky et al. in Refs. arXiv:0808.1390 and arXiv:0811.0214 (Berciu et al.). Possible consequences for the conditions of a formation of bipolarons are discussed, too. From the extrapolated muon spin rotation penetration depth data at T = 0 and the experimental Omega_p, we estimate the total coupling constant lambda_tot for the el-boson interaction within the Eliashberg theory, adopting a single-band approximation. For LaFeAsO_0.9F_0.1 a weak to intermediately strong coupling regime and a quasi-clean limit behaviour are found. For a pronounced multiband case we obtain a constraint for various intraband coupling constants which in principle allows for a sizable strong coupling in bands with either slow electrons or holes. Comment: 34 pages, 10 figures, submitted to New Journal of Physics (30.01.2009)
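The step from ionic polarizabilities to a background dielectric constant is commonly made with the Clausius-Mossotti relation; the sketch below shows only that arithmetic, with placeholder polarizabilities and cell volume that are not the paper's actual input data:

```python
from math import pi

# Clausius-Mossotti estimate of a background dielectric constant from
# ionic polarizabilities. ALL numbers are hypothetical placeholders,
# not the values used in the paper.

alpha = {"As": 9.5, "La": 1.0, "O": 2.0, "Fe": 0.5}  # polarizabilities, A^3
v_cell = 70.0   # hypothetical volume per formula unit, A^3

# s = (4*pi/3) * sum of polarizabilities per unit volume
s = (4.0 * pi / 3.0) * sum(alpha.values()) / v_cell
eps_background = (1.0 + 2.0 * s) / (1.0 - s)   # Clausius-Mossotti relation

print(f"estimated background dielectric constant: {eps_background:.1f}")
```

With such inputs the estimate lands in the empirical 12-15 ballpark, which is the kind of consistency check behind the arsenic polarizability quoted above.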

    Disorder-induced Spin Gap in the Zigzag Spin-1/2 Chain Cuprate Sr_{0.9}Ca_{0.1}CuO_2

    We report a comparative study of 63Cu Nuclear Magnetic Resonance spin-lattice relaxation rates, T_1^{-1}, on undoped SrCuO_2 and Ca-doped Sr_{0.9}Ca_{0.1}CuO_2 spin chain compounds. A temperature-independent T_1^{-1} is observed for SrCuO_2, as expected for an S=1/2 Heisenberg chain. Surprisingly, we observe an exponential decrease of T_1^{-1} for T < 90 K in the Ca-doped sample, evidencing the opening of a spin gap. The data analysis within the J_1-J_2 Heisenberg model, employing density-matrix renormalization group calculations, suggests an impurity-driven small alternation of the J_2 exchange coupling as a possible cause of the spin gap. Comment: 4 pages, 4 figures
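The exponential decrease of T_1^{-1} is the signature of an activated law, T_1^{-1} ∝ exp(-Δ/T), so the gap Δ is read off as the slope of ln(T_1^{-1}) versus 1/T; a sketch with synthetic data and a hypothetical gap value:

```python
import numpy as np

# Extracting a spin gap from activated relaxation: ln(T1^{-1}) vs 1/T is a
# straight line with slope -Delta. The data here are synthetic, generated
# with a hypothetical gap, not the measured rates.

Delta_true = 50.0                        # hypothetical gap in Kelvin
T = np.linspace(20.0, 90.0, 15)          # temperatures below ~90 K
rate = 3.0e3 * np.exp(-Delta_true / T)   # synthetic T1^{-1} data

slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Delta_fit = -slope                       # gap in Kelvin
print(f"fitted gap: {Delta_fit:.1f} K")
```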

    The Ursinus Weekly, March 21, 1960

    Lesher, Mackey, Holl to star in The Heiress • WRUC to begin broadcasting on Monday, April 4 • Committees listed for annual May Day Pageant May 7 • To crown queen at prom; Mardi gras theme planned • Ursinus mourns the passing of Dr. Alfred Wilcox • Campus Chest surpasses goal; Many attend show • Scholarship for St. Andrew's is announced • Cathy Nicolai is named as new Weekly editor • Bell, book and candle try-outs on March 21, 23 • Y to hold forum on social work • Five U.C. alumni named to Who's who in America • Editorial: Tribute; Open letter • Letters to the editor • Education • Sonnet on an editorial • Meandering: Part two • Bluffing game • Three Ursinus cagers are honored by Who's who • Ursinus girls defeat Beaver by 67-31 score • Swimming team in good form at Penn and Temple • Knock at any dorm • MSGA holds meeting; Two freshmen charged • Beardwood Two is volleyball champ of intramurals

    Active diffusion and advection in Drosophila oocytes result from the interplay of actin and microtubules

    Transport in cells occurs via a delicate interplay of passive and active processes, including diffusion, directed transport and advection. Despite progress in super-resolution microscopy, discriminating and quantifying these processes is a challenge, requiring tracking of rapidly moving, sub-diffraction objects in a crowded, noisy environment. Here we use Differential Dynamic Microscopy with different contrast mechanisms to provide a thorough characterization of the dynamics in the Drosophila oocyte. We study the movement of vesicles and the elusive motion of a cytoplasmic F-actin mesh, a known regulator of cytoplasmic flows. We find that cytoplasmic motility constitutes a combination of directed motion and random diffusion. While advection is mainly attributed to microtubules, we find that active diffusion is driven by the actin cytoskeleton, although it is also enhanced by the flow. We also find that an important dynamic link exists between vesicles and cytoplasmic F-actin motion, as recently suggested in mouse oocytes. MD and IMP were supported by the BBSRC, the Department of Zoology (Cambridge), the University of Cambridge, and an Isaac Newton Trust fellowship to MD. FG and RC acknowledge funding by the Italian Ministry of Education and Research, Futuro in Ricerca Project ANISOFT (RBFR125H0M) and by Fondazione CARIPLO-Regione Lombardia Project Light for Life (2016-0998).
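The core DDM quantity is the image structure function D(q, dt) = <|FFT[I(x, t+dt) - I(x, t)]|^2>, whose dependence on lag time encodes diffusion and advection; a minimal sketch of its computation (on a synthetic noise stack, not the oocyte data) might look like this:

```python
import numpy as np

# Minimal Differential Dynamic Microscopy (DDM) bookkeeping: compute the
# image structure function for one lag time and average it azimuthally
# over wave vectors of equal magnitude |q|. The image stack is synthetic
# random noise, used purely to demonstrate the computation.

rng = np.random.default_rng(1)
stack = rng.normal(size=(32, 64, 64))          # (frames, ny, nx)

def structure_function(stack, lag):
    # Fourier transform of every difference image at the given lag ...
    diffs = np.fft.fft2(stack[lag:] - stack[:-lag], axes=(1, 2))
    # ... and the average of their power over all start times.
    return (np.abs(diffs) ** 2).mean(axis=0)

def radial_average(img, n_bins=16):
    # Azimuthal average: bin Fourier pixels by wave-vector magnitude.
    ny, nx = img.shape
    q = np.hypot(np.fft.fftfreq(nx)[None, :], np.fft.fftfreq(ny)[:, None])
    edges = np.linspace(0.0, 0.5, n_bins)
    idx = np.digitize(q.ravel(), edges)
    vals = img.ravel()
    return np.array([vals[idx == i].mean() for i in range(1, n_bins)])

D_q = radial_average(structure_function(stack, lag=1))
print(D_q.shape)   # one value of D(q, dt=1) per |q| bin
```

In a real analysis this is repeated over many lag times and D(q, dt) is fitted to a model containing diffusive and advective contributions.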

    ToPoliNano and fiction: Design Tools for Field-coupled Nanocomputing

    Field-coupled Nanocomputing (FCN) is a computing concept with several promising post-CMOS candidate implementations that offer tremendously low power dissipation combined with very high processing performance. Two of the many physical implementations are Quantum-dot Cellular Automata (QCA) and Nanomagnet Logic (NML). Both inherently come with domain-specific properties and design constraints that render established conventional design algorithms inapplicable. Accordingly, dedicated design tools for these technologies are required. This paper provides an overview of two leading examples of such tools, namely fiction and ToPoliNano. Both tools provide effective methods that cover aspects such as placement, routing, clocking, design rule checking, verification, and logical as well as physical simulation. Both freely available tools thereby provide platforms for future research in the FCN domain.

    Cooper pairs as bosons

    Although BCS pairs of fermions are known to obey neither Bose-Einstein (BE) commutation relations nor BE statistics, we show how Cooper pairs (CPs), whether the simple original ones or the CPs recently generalized in a many-body Bethe-Salpeter approach, while clearly distinct from BCS pairs, at least obey BE statistics. Hence, contrary to widespread popular belief, CPs can undergo BE condensation to account for superconductivity if charged, as well as for neutral-atom fermion superfluidity, where uncharged CPs are also expected to form. Comment: 8 pages, 2 figures, full biblio info added

    Cosmological spacetimes balanced by a scale covariant scalar field

    A scale invariant, Weyl geometric, Lagrangian approach to cosmology is explored, with a scalar field phi of (scale) weight -1 as a crucial ingredient besides classical matter \cite{Tann:Diss,Drechsler:Higgs}. For a particularly simple class of Weyl geometric models (called {\em Einstein-Weyl universes}) the Klein-Gordon equation for phi is explicitly solvable. In this case the energy-stress tensor of the scalar field consists of a vacuum-like term Lambda g_{mu nu} with variable coefficient Lambda, depending on matter density and spacetime geometry, and of a dark-matter-like term. Under certain assumptions on parameter constellations, the energy-stress tensor of the phi-field keeps Einstein-Weyl universes in locally stable equilibrium. A short glance at observational data, in particular supernovae Ia (Riess et al. 2007), shows interesting empirical properties of these models. Comment: 28 pages, 1 figure, accepted by Foundations of Physics

    Predicting erythropoietin resistance in hemodialysis patients with type 2 diabetes

    Background: Resistance to ESAs (erythropoietin stimulating agents) is highly prevalent in hemodialysis patients with diabetes and associated with increased mortality. The aim of this study was to identify predictors of ESA resistance and to develop a prediction model for risk stratification in these patients.
    Methods: A post-hoc analysis was conducted of the 4D study, including 1015 patients with type 2 diabetes undergoing hemodialysis. Determinants of ESA resistance were identified by univariate logistic regression analyses. Subsequently, multivariate models were performed with stepwise inclusion of significant predictors from clinical parameters, routine laboratory markers and specific biomarkers.
    Results: In the model restricted to clinical parameters, male sex, shorter dialysis vintage, lower BMI, history of CHF, use of ACE inhibitors and a higher heart rate were identified as independent predictors of ESA resistance. With regard to routine laboratory markers, lower albumin, lower iron saturation, higher creatinine and higher potassium levels were independently associated with ESA resistance. With respect to specific biomarkers, higher ADMA and CRP levels as well as lower osteocalcin levels were predictors of ESA resistance.
    Conclusions: Easily obtainable clinical parameters and routine laboratory parameters can predict ESA resistance in diabetic hemodialysis patients with good discrimination. Specific biomarkers did not meaningfully further improve the risk prediction of ESA resistance. Routinely assessed data can be used in clinical practice to stratify patients according to the risk of ESA resistance, which may help to assign appropriate treatment strategies.
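The two-stage procedure (univariate screening followed by a multivariate model on the retained predictors) can be sketched on synthetic data; the predictors, effect sizes and selection threshold below are hypothetical, and plain gradient descent stands in for the study's statistical software:

```python
import numpy as np

# Schematic two-stage predictor selection on SYNTHETIC data (not the 4D
# study data): screen candidate predictors one at a time with univariate
# logistic fits, then fit a multivariate logistic model on those retained.

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 4))                   # 4 candidate predictors
true_beta = np.array([1.5, -1.0, 0.0, 0.0])   # only the first two matter
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = rng.random(n) < p                         # binary outcome ("resistance")

def fit_logistic(X, y, iters=500, lr=0.1):
    # Logistic regression with intercept, fitted by gradient descent.
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        grad = Xb.T @ (1.0 / (1.0 + np.exp(-Xb @ beta)) - y) / len(y)
        beta -= lr * grad
    return beta

# Stage 1: univariate screen -- keep predictors with a non-trivial slope
# (the 0.3 cut-off is an arbitrary illustration, not the study's criterion).
selected = [j for j in range(X.shape[1])
            if abs(fit_logistic(X[:, [j]], y)[1]) > 0.3]

# Stage 2: multivariate model on the selected predictors only.
beta = fit_logistic(X[:, selected], y)
print(selected, np.round(beta, 2))
```

In the study itself, significance tests rather than a fixed slope cut-off drive the stepwise inclusion, but the screen-then-refit structure is the same.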